In this paper, we consider the problem of anomaly segmentation, which has attracted increasing attention in recent years due to the expense of expert pixel-level annotations and the abundance of unannotated normal and abnormal image scans. We introduce a segmentation network that utilizes adversarial learning to partition an image into two cuts, one of which falls into a reference distribution provided by the user. This Adversarial-based Selective Cutting network (ASC-Net) bridges the two domains of cluster-based deep segmentation and adversarial-based anomaly/novelty detection algorithms. Our ASC-Net learns from normal and abnormal medical scans to segment anomalies in medical scans without any mask supervision. We evaluate this unsupervised anomaly segmentation model on three public datasets, i.e., BraTS 2019 for brain tumor segmentation, a liver lesion segmentation dataset, and MS-SEG 2015 for brain lesion segmentation, as well as a private dataset for brain tumor segmentation. Compared to existing methods, our model demonstrates tremendous performance gains on unsupervised anomaly segmentation tasks. Although there is still room to further improve performance compared to supervised learning algorithms, the promising experimental results and interesting observations shed light on building unsupervised learning algorithms for medical anomaly identification using user-defined knowledge.
Translated by Google Translate
Interactive and non-interactive models are the two de-facto standard frameworks in vector-based cross-lingual information retrieval (V-CLIR), which embed queries and documents synchronously and asynchronously, respectively. From the perspectives of retrieval accuracy and computational efficiency, each model has its own superiority and shortcoming. In this paper, we propose a novel framework to leverage the advantages of these two paradigms. Specifically, we introduce a semi-interactive mechanism, which builds our model upon a non-interactive architecture but encodes each document together with its associated multilingual queries. Accordingly, the cross-lingual features of interactive models can be better learned. Besides, we further transfer knowledge from a well-trained interactive model to ours by reusing its word embeddings and adopting knowledge distillation. Our model is initialized from the multilingual pre-trained language model M-BERT, and evaluated on datasets constructed from Wikipedia and an in-house dataset collected from a real-world search engine. Extensive analyses demonstrate that our approach significantly improves retrieval accuracy while maintaining computational efficiency.
Systems for knowledge-intensive tasks such as open-domain question answering (QA) usually consist of two stages: efficient retrieval of relevant documents from a large corpus and detailed reading of the selected documents to generate answers. Retrievers and readers are usually modeled separately, which necessitates a cumbersome implementation and is hard to train and adapt in an end-to-end fashion. In this paper, we revisit this design and eschew the separate architecture and training in favor of a single Transformer that performs Retrieval as Attention (ReAtt), and end-to-end training solely based on supervision from the end QA task. We demonstrate for the first time that a single model trained end-to-end can achieve both competitive retrieval and QA performance, matching or slightly outperforming state-of-the-art separately trained retrievers and readers. Moreover, end-to-end adaptation significantly boosts its performance on out-of-domain datasets in both supervised and unsupervised settings, making our model a simple and adaptable solution for knowledge-intensive tasks. Code and models are available at https://github.com/jzbjyb/ReAtt.
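The "retrieval as attention" view can be hedged down to a toy sketch: retrieval scores are just softmax-normalized attention scores between a query representation and document representations. The vectors and function names below are illustrative stand-ins, not the ReAtt implementation.

```python
import math

# Toy sketch of retrieval-as-attention: score documents with the same
# softmax(dot-product) operation a Transformer uses for attention, then
# rank by attention mass. Vectors stand in for learned representations.

def attention_scores(query, docs):
    """Softmax-normalized dot-product scores of query against each doc."""
    logits = [sum(q * d for q, d in zip(query, doc)) for doc in docs]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def retrieve(query, docs, k=1):
    """Return indices of the top-k documents by attention score."""
    scores = attention_scores(query, docs)
    return sorted(range(len(docs)), key=lambda i: -scores[i])[:k]

docs = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(retrieve([1.0, 0.1], docs))  # -> [0]
```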
This paper describes the submission of the RoyalFlush neural machine translation system for the WMT 2022 translation efficiency task. Unlike the commonly used autoregressive translation system, we adopted a two-stage translation paradigm called Hybrid Regression Translation (HRT) to combine the advantages of autoregressive and non-autoregressive translation. Specifically, HRT first autoregressively generates a discontinuous sequence (e.g., make a prediction every $k$ tokens, $k>1$) and then fills in all previously skipped tokens at once in a non-autoregressive manner. Thus, we can easily trade off the translation quality and speed by adjusting $k$. In addition, by integrating other modeling techniques (e.g., sequence-level knowledge distillation and a deep-encoder-shallow-decoder layer allocation strategy) and a mass of engineering efforts, HRT improves inference speed by 80\% and achieves translation performance equivalent to a same-capacity AT counterpart. Our fastest system reaches 6k+ words/second in the GPU latency setting, estimated to be about 3.1x faster than last year's winner.
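The two-stage HRT decoding scheme can be sketched in a few lines, with a dummy `predict` function standing in for the actual NMT model (the function and its signature are illustrative assumptions, not from the paper):

```python
# Toy sketch of Hybrid Regression Translation (HRT) decoding:
# stage 1 predicts every k-th position autoregressively, stage 2 fills
# all skipped positions at once in a non-autoregressive manner.

def hrt_decode(length, k, predict):
    """Decode `length` tokens; `predict(pos, prefix)` is a model stand-in."""
    tokens = [None] * length
    # Stage 1: autoregressive pass over a discontinuous sequence.
    prefix = []
    for pos in range(0, length, k):
        tok = predict(pos, prefix)
        tokens[pos] = tok
        prefix.append(tok)
    # Stage 2: fill the skipped slots (conceptually one batched step).
    for pos in (p for p in range(length) if tokens[p] is None):
        tokens[pos] = predict(pos, prefix)
    return tokens

# Dummy predictor: token id is just 10 * position (illustration only).
print(hrt_decode(6, 2, lambda pos, prefix: 10 * pos))  # -> [0, 10, 20, 30, 40, 50]
```

Larger `k` shortens the stage-1 autoregressive loop, which is the quality/speed knob the abstract describes.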
Arbitrary style transfer (AST) transfers arbitrary artistic styles onto content images. Despite the recent rapid progress, existing AST methods are either incapable or too slow to run at ultra-resolutions (e.g., 4K) with limited resources, which heavily hinders their further applications. In this paper, we tackle this dilemma by learning a straightforward and lightweight model, dubbed MicroAST. The key insight is to completely abandon the use of cumbersome pre-trained Deep Convolutional Neural Networks (e.g., VGG) at inference. Instead, we design two micro encoders (content and style encoders) and one micro decoder for style transfer. The content encoder aims at extracting the main structure of the content image. The style encoder, coupled with a modulator, encodes the style image into learnable dual-modulation signals that modulate both intermediate features and convolutional filters of the decoder, thus injecting more sophisticated and flexible style signals to guide the stylizations. In addition, to boost the ability of the style encoder to extract more distinct and representative style signals, we also introduce a new style signal contrastive loss in our model. Compared to the state of the art, our MicroAST not only produces visually superior results but also is 5-73 times smaller and 6-18 times faster, for the first time enabling super-fast (about 0.5 seconds) AST at 4K ultra-resolutions. Code is available at https://github.com/EndyWon/MicroAST.
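The dual-modulation signals modulate both intermediate features and convolutional filters of the decoder; a minimal FiLM-style scale-and-shift sketch of the feature-side half is shown below (scalar `gamma`/`beta` and the function name are illustrative simplifications, not the MicroAST implementation):

```python
# Minimal sketch of style-driven feature modulation: style-derived
# signals (gamma, beta) scale and shift decoder features. The paper's
# model also modulates convolution filters, which is omitted here.

def modulate(features, gamma, beta):
    """Scale-and-shift modulation of a feature vector by style signals."""
    return [gamma * f + beta for f in features]

print(modulate([1.0, 2.0], 2.0, 0.5))  # -> [2.5, 4.5]
```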
Among current anchor-based detectors, a positive anchor box will be intuitively assigned to the object that overlaps it the most. The assigned label to each anchor will directly determine the optimization direction of the corresponding prediction box, including the direction of box regression and category prediction. In our practice of crowded object detection, however, the results show that a positive anchor does not always regress toward the object that overlaps it the most when multiple objects overlap. We name it anchor drift. The anchor drift reflects that the anchor-object matching relation, which is determined by the degree of overlap between anchors and objects, is not always optimal. Conflicts between the fixed matching relation and learned experience in the past training process may cause ambiguous predictions and thus raise the false-positive rate. In this paper, a simple but efficient adaptive two-stage anchor assignment (TSAA) method is proposed. It utilizes the final prediction boxes rather than the fixed anchors to calculate the overlap degree with objects to determine which object to regress for each anchor. The participation of the prediction box makes the anchor-object assignment mechanism adaptive. Extensive experiments are conducted on three classic detectors RetinaNet, Faster-RCNN and YOLOv3 on CrowdHuman and COCO to evaluate the effectiveness of TSAA. The results show that TSAA can significantly improve the detectors' performance without additional computational costs or network structure changes.
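The adaptive assignment idea can be sketched as follows: instead of matching each anchor to the ground-truth box it overlaps most, match it to the ground truth that its current prediction box overlaps most. The box format and helper names are illustrative, not the paper's code:

```python
# Hedged sketch of TSAA-style adaptive assignment: compute IoU between
# *prediction* boxes (not fixed anchors) and ground-truth boxes, and
# assign each anchor to the GT its prediction overlaps most.

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def assign(pred_boxes, gt_boxes):
    """For each prediction box, index of its best-overlapping ground truth."""
    return [max(range(len(gt_boxes)), key=lambda j: iou(p, gt_boxes[j]))
            for p in pred_boxes]

gts = [(0, 0, 10, 10), (8, 0, 18, 10)]   # two overlapping objects
preds = [(1, 0, 11, 10)]                 # prediction drifted toward object 0
print(assign(preds, gts))                # -> [0]
```

Because assignment follows the prediction rather than a fixed anchor geometry, a drifted anchor is relabeled toward the object it is actually regressing to.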
A key assumption in most existing works on FL algorithms' convergence analysis is that the noise in stochastic first-order information has a finite variance. Although this assumption covers all light-tailed (i.e., sub-exponential) and some heavy-tailed noise distributions (e.g., log-normal, Weibull, and some Pareto distributions), it fails for many fat-tailed noise distributions (i.e., ``heavier-tailed'' with potentially infinite variance) that have been empirically observed in the FL literature. To date, it remains unclear whether one can design convergent algorithms for FL systems that experience fat-tailed noise. This motivates us to fill this gap in this paper by proposing an algorithmic framework called FAT-Clipping (\ul{f}ederated \ul{a}veraging with \ul{t}wo-sided learning rates and \ul{clipping}), which contains two variants: FAT-Clipping per-round (FAT-Clipping-PR) and FAT-Clipping per-iteration (FAT-Clipping-PI). Specifically, for the largest $\alpha \in (1,2]$ such that the fat-tailed noise in FL still has a bounded $\alpha$-moment, we show that both variants achieve $\mathcal{O}((mT)^{\frac{2-\alpha}{\alpha}})$ and $\mathcal{O}((mT)^{\frac{1-\alpha}{3\alpha-2}})$ convergence rates in the strongly-convex and general non-convex settings, respectively, where $m$ and $T$ are the numbers of clients and communication rounds. Moreover, at the expense of more clipping operations compared to FAT-Clipping-PR, FAT-Clipping-PI further enjoys a linear speedup effect with respect to the number of local updates at each client and being lower-bound-matching (i.e., order-optimal). Collectively, our results advance the understanding of designing efficient algorithms for FL systems that exhibit fat-tailed first-order oracle information.
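At the heart of both FAT-Clipping variants is a norm-clipping operator applied to (possibly fat-tailed) stochastic gradients: the update direction is rescaled so its norm never exceeds a threshold. A minimal sketch, with an illustrative threshold name (the per-round vs. per-iteration variants differ only in where this operator is applied, which is not shown):

```python
import math

# Minimal sketch of the clipping operator underlying FAT-Clipping:
# rescale a gradient so its Euclidean norm is at most lam, leaving
# small gradients untouched.

def clip(grad, lam):
    """Return grad rescaled to norm at most lam."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, lam / norm) if norm > 0 else 1.0
    return [scale * g for g in grad]

print(clip([3.0, 4.0], 10.0))                        # -> [3.0, 4.0] (norm 5, unchanged)
print([round(v, 6) for v in clip([3.0, 4.0], 1.0)])  # -> [0.6, 0.8] (rescaled to norm 1)
```

Bounding the update norm is what restores convergence guarantees when the noise only has a finite $\alpha$-moment for some $\alpha \in (1,2]$ rather than a finite variance.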
To lower the communication complexity of federated min-max learning, a natural approach is to utilize the idea of infrequent communications (through multiple local updates), as in conventional federated learning. However, due to the more complicated inner-outer problem structure in federated min-max learning, theoretical understandings of communication complexity for federated min-max learning with infrequent communications remain very limited in the literature. This is particularly true for settings with non-i.i.d. datasets and partial client participation. To address this challenge, in this paper, we propose a new algorithmic framework called stochastic sampling averaging gradient descent ascent (SAGDA), which i) assembles stochastic gradient estimators from randomly sampled clients as control variates and ii) leverages two learning rates on both server and client sides. We show that SAGDA achieves a linear speedup in terms of both the number of clients and local update steps, which yields an $\mathcal{O}(\epsilon^{-2})$ communication complexity that is orders of magnitude lower than the state of the art. Interestingly, by noting that the standard federated stochastic gradient descent ascent (FSGDA) is in fact a control-variate-free special version of SAGDA, we immediately arrive at an $\mathcal{O}(\epsilon^{-2})$ communication complexity result for FSGDA. Therefore, through the lens of SAGDA, we also advance the current understanding on communication complexity of the standard FSGDA method for federated min-max learning.
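Setting aside the server/client structure and control variates, the core update both SAGDA and FSGDA build on is gradient descent ascent (GDA): descend in the min variable, ascend in the max variable, with separate learning rates. A toy sketch on the saddle function $f(x, y) = xy$ (the function and step sizes are illustrative only):

```python
# Toy gradient descent ascent (GDA) on f(x, y) = x * y:
# x takes a descent step, y takes an ascent step, each with its own
# learning rate, updated simultaneously.

def gda(x, y, eta_x, eta_y, steps):
    for _ in range(steps):
        gx, gy = y, x                      # df/dx = y, df/dy = x
        x, y = x - eta_x * gx, y + eta_y * gy
    return x, y

x, y = gda(1.0, 1.0, 0.1, 0.1, 1)
print(x, y)  # -> 0.9 1.1
```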
Small lesions in magnetic resonance imaging (MRI) images are crucial for the clinical diagnosis of many diseases. However, MRI quality is easily degraded by various kinds of noise, which can greatly reduce the diagnostic accuracy for small lesions. Although some methods for denoising MR images have been proposed, task-specific denoising methods that improve diagnostic confidence for small lesions are still lacking. In this work, we propose to denoise three-dimensional (3D) MR images containing small lesions with a voxel-wise hybrid residual MLP-CNN model. We combine two basic deep learning architectures, MLP and CNN, to obtain an appropriate inherent bias for image denoising, and add residual connections to exploit long-range information and to integrate each output layer of the MLP and the CNN. We evaluate the proposed method on 720 T2-FLAIR brain images with small lesions at different noise levels. The results show that our method outperforms state-of-the-art methods on the test dataset in both quantitative and visual evaluations. Moreover, two experienced radiologists agreed that, at moderate and high noise levels, our method outperforms the others in recovering small lesions and overall image quality. The implementation of our method is available at https://github.com/laowangbobo/Residual_MLP_CNN_MIXER.
In recent years, attention-based scene text recognition methods have been very popular and have attracted the interest of many researchers. Attention-based methods can adaptively focus on a small region or even a single point during decoding, where the attention matrix is nearly a one-hot distribution. Moreover, during inference, every attention matrix is weighted over the entire feature map, leading to huge redundant computations. In this paper, we propose an efficient attention-free Single-Point Decoding Network (dubbed SPDN) for scene text recognition, which can replace the traditional attention-based decoding network. Specifically, we propose a Single-Point Sampling Module (SPSM) to efficiently sample one key point on the feature map for decoding one character. In this way, our method can not only precisely locate the key point of each character but also remove redundant computations. Based on SPSM, we design an efficient and novel single-point decoding network to replace the attention-based decoding network. Extensive experiments on publicly available benchmarks verify that our SPDN can greatly improve decoding efficiency without sacrificing performance.
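The efficiency argument above can be sketched in 1-D: when attention is nearly one-hot, weighting the whole feature map gives almost the same readout as sampling the single peak position, but costs a full pass over the map per character. The helper names and toy scores below are illustrative, not the SPSM implementation:

```python
# Contrast between a full attention-weighted readout and a single-point
# readout: with near-one-hot scores, sampling only the peak position
# recovers essentially the same feature at a fraction of the cost.

def attention_readout(scores, features):
    """Conventional readout: weight every position (O(N) per character)."""
    total = sum(scores)
    return sum(s / total * f for s, f in zip(scores, features))

def single_point_readout(scores, features):
    """Single-point readout: read only the peak position (one lookup)."""
    return features[max(range(len(scores)), key=scores.__getitem__)]

scores = [0.05, 0.9, 0.05]   # near-one-hot attention over 3 positions
feats = [1.0, 7.0, 3.0]
print(single_point_readout(scores, feats))  # -> 7.0
```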